Seismic data interpolation via a series of new regularizing functions

Authors

Borhan Tavakoli, Department of Earth Physics, Institute of Geophysics, University of Tehran

Ali Gholami, Department of Earth Physics, Institute of Geophysics, University of Tehran

Hamidreza Siahkoohi, Department of Earth Physics, Institute of Geophysics, University of Tehran

Abstract

Natural signals are continuous; therefore, digitizing them is essential so that computing tools can be used to process them. According to the Nyquist/Shannon sampling theory, the sampling frequency must be at least twice the maximum frequency contained in the signal being sampled; otherwise, some high frequencies may be aliased and result in a poor reconstruction. Sampling at the Nyquist rate makes it possible to reconstruct the original signal exactly from its acquired samples. One way to make the sampling process reliable is to use a high sampling rate, but the huge volume of data generated by this approach is a major challenge in many fields, such as seismic exploration; moreover, the sampling equipment sometimes cannot handle the broad frequency band. Seismic data acquisition consists of sampling, in time and in the spatial directions, a wavefield generated by a source such as dynamite. Sampling should follow a regular pattern of receivers; nevertheless, owing to acquisition obstacles, seismic data sets are generally sampled irregularly in the spatial direction(s). This irregularity produces low-quality seismic images that contain artifacts and missing traces. One approach developed to deal with this defect is interpolation of the acquired data onto a regular grid. Through interpolation we can obtain an estimate of the fully sampled desired signal; interpolation can also serve as a tool for designing a sparser acquisition geometry, which results in a more cost-effective survey. Compressive sensing (CS) theory has been developed to allow data to be sampled below the Nyquist rate while still being reconstructable by solving an optimization problem. The theory states that signals/images that have a sparse representation in a pre-specified basis or frame can be reconstructed accurately from a small number of samples. The principle of CS is based on a Tikhonov-like regularization (Eq. 1) that uses sparsifying regularization terms. In Equation (1), the CS sampling operator contains three elements: (i) a sparsifying transform C, which provides a sparse representation of the signal/image in the chosen basis; (ii) a measurement matrix M, which for the seismic problem is the identity matrix; and (iii) an undersampling operator S, which is incoherent with the sparsifying operator C. The curvelet transform is a frame whose elements correlate strongly with the curve-like reflection events present in seismic data and can therefore provide a sparse representation of seismic images. The undersampling scheme used in this paper is jittered undersampling, which allows the maximum gap size between known traces to be controlled; other commonly used schemes are Gaussian random and binary random undersampling. Since undersampling appears in the frequency domain as Gaussian random noise, the interpolation problem can be treated as a nonlinear denoising problem, for which curvelet frames are an optimal choice. Sparsity regularization plays a leading role in CS theory, and the same approach has also been applied effectively to other problems such as denoising and deconvolution. A wide range of functions can impose sparsity in the regularization equation; their performance in interpolating incomplete data depends on how well they match the properties of the initial model. Among the variety of potential functions, the l1-norm is the best known and most commonly used.
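The regularization problem referred to as Equation (1) is not reproduced on this page. As a point of reference only, a generic sparsity-regularized least-squares formulation of the kind the abstract describes, written with the undersampling operator S, measurement matrix M, and sparsifying transform C named above, might be sketched as follows; the symbols y (observed traces), x (transform coefficients), lambda (regularization weight), and phi (potential function) are illustrative assumptions, not the paper's notation.

```latex
% Illustrative sketch only, not the paper's exact Eq. (1):
% y   : observed (undersampled, noisy) seismic data
% x   : curvelet-domain coefficients of the sought image
% S   : undersampling operator,  M : measurement matrix (identity here)
% C^* : synthesis (adjoint) side of the sparsifying curvelet transform C
% phi : sparsity-promoting potential function, lambda : regularization weight
\hat{x} \;=\; \arg\min_{x}\;
  \tfrac{1}{2}\,\bigl\| y - S\,M\,C^{*} x \bigr\|_{2}^{2}
  \;+\; \lambda\,\varphi(x),
\qquad \text{e.g.}\quad \varphi(x)=\|x\|_{1}
  \;\;\text{or}\;\; \varphi(x)=\|x\|_{p}^{p},\; 0<p\le 1 .
```

Under this notation the interpolated image would be recovered from the estimated coefficients by applying the synthesis transform, e.g. d-hat = C*x-hat.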
Still, a comprehensive study is needed to determine which of these functions is the most efficient for seismic image reconstruction; this gap exists because a general potential function has been lacking. Here we use a general potential function that enables us to compare the efficiency of a wide range of potential functions and find the optimal one for our problem. This regularization function includes the lp-norm functions, among others, as special cases (Table 1) and covers both convex and non-convex regularizers. In this paper we use this potential function to compare the efficiency of different choices within the CS algorithm. A contentious part of solving regularization problems is setting the regularization parameter; here, because of the redundancy of the curvelet transform, assigning a proper value is difficult. Many approaches, such as the L-curve, Stein's unbiased risk estimate (SURE), and generalized cross-validation (GCV), struggle to find this parameter, so we turn to nonlinear approaches such as NGCV (nonlinear GCV) and WSURE (weighted SURE). The efficiency of these methods for estimating the regularization parameter and choosing the best potential function is evaluated on a synthetic noisy seismic image: by undersampling this image and removing more than 60% of its traces, we build the initial/observed model, and this imperfect image serves as our acquired seismic data. To solve Equation (1) we use a forward-backward splitting recursion algorithm. Through this algorithm we arrive at the optimal potential function and a method for estimating the regularization parameter.
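As an illustration of the forward-backward splitting recursion mentioned above, the following minimal Python sketch applies it to a small, generic sparsity-regularized least-squares problem. It is not the paper's implementation: the dense random matrix stands in for the curvelet-domain CS operator, the soft-thresholding step is the proximal mapping of the l1-norm (only one member of the family of potential functions discussed in the paper), and all parameter values are assumptions chosen for the demo.

```python
# Minimal forward-backward (proximal gradient) splitting sketch for
#     min_x  0.5 * ||y - A x||_2^2 + lam * ||x||_1 .
# Illustrative only: A, the l1 prox, and the parameters are demo assumptions.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def forward_backward(A, y, lam, n_iter=200):
    """ISTA-style forward-backward splitting with a fixed step size."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                    # forward (gradient) step on the data term
        x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 256, 96, 8                            # signal length, measurements, nonzeros
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)    # stand-in for the CS sampling operator
    y = A @ x_true + 0.01 * rng.standard_normal(m)  # noisy undersampled data
    x_hat = forward_backward(A, y, lam=0.02)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Other potential functions from the general family would be handled by replacing soft_threshold with their own proximal mappings, and in practice the weight lam would be chosen by a rule such as NGCV or WSURE rather than fixed by hand.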


Related articles

Cosparse seismic data interpolation

Many modern seismic data interpolation and redatuming algorithms rely on the promotion of transform-domain sparsity for high-quality results. Amongst the large diversity of methods and different ways of realizing sparse reconstruction lies a central question that often goes unaddressed: is it better for the transform-domain sparsity to be achieved through explicit construction of sparse represe...


Series expansion of Wiener integrals via block pulse functions

In this paper, a suitable numerical method based on block pulse functions is introduced to approximate Wiener integrals whose exact solutions do not exist or are very hard to find. Furthermore, the error analysis of this method is given. Some numerical examples are provided which show that the approximation method has a good degree of accuracy. The main ...


Interferometric interpolation of missing seismic data

Interpolation of missing seismic data (such as missing near-offset data, gaps, etc.) is an important issue in seismic surveys. Here we show that interferometry can be used to fill in some gaps in the data. The interferometric method creates virtual traces in the gaps by crosscorrelation of traces in a shot gather followed by summation over all shot positions. Compared to other interpolation meth...


New semigroup compactifications via the enveloping semigroups of associated flows

This thesis deals with the construction of some function algebras whose corresponding semigroup compactifications are universal with respect to some properties of their enveloping semigroups. The special properties are those of being a left zero, a left simple, a group, an inflation of the right zero, and an inflation of the rectangular band.


Patching and micropatching in seismic data interpolation

I interpolate CMP gathers with PEFs arranged on a dense, radial grid. The radial grid facilitates preconditioning by radial smoothing, and enables the use of relatively large grid cells, which we refer to as micropatches. Even when the micropatches contain enough data samples that the PEF calculation problem appears overdetermined, radial smoothing still noticeably improves the interpolation, p...


A low rank based seismic data interpolation via frequency-patches transform and low rank space projection

We propose a new algorithm to improve computational efficiency for low rank based interpolation. The interpolation is carried out in the frequency spatial domain where each frequency slice is first transferred to the frequency-patches domain. A nice feature of this domain is that the number of non-zero singular values can be better related to seismic events, which favors low rank reduction. Dur...




Journal:
Journal of the Earth and Space Physics

Volume 40, Issue 2, pages 59-68

